In this notebook, we'll recreate the style transfer method outlined in the paper Image Style Transfer Using Convolutional Neural Networks, by Gatys et al., in PyTorch.
In this paper, style transfer uses the features found in the 19-layer VGG network, which is made up of a series of convolutional and pooling layers and a few fully-connected layers. In the image below, the convolutional layers are named by stack and by their order within the stack. Conv1_1 is the first convolutional layer that an image is passed through, in the first stack. Conv2_1 is the first convolutional layer in the second stack. The deepest convolutional layer in the network is conv5_4. (A short sketch at the end of this introduction shows how these names line up with the layer indices in torchvision's VGG19.)
Style transfer relies on separating the content and style of an image. Given one content image and one style image, we aim to create a new, target image which should contain our desired content and style components:
An example is shown below, where the content image is of a cat, and the style image is of Hokusai's Great Wave. The generated target image still contains the cat but is stylized with the waves, blue and beige colors, and block print textures of the style image!
In this notebook, we'll use a pre-trained VGG19 Net to extract content or style features from a passed-in image. We'll then formalize the idea of content and style losses and use those to iteratively update our target image until we get a result that we want. You are encouraged to use a style and content image of your own and share your work on Twitter with @udacity; we'd love to see what you come up with!
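To make the layer naming above concrete, here is a minimal sketch (an illustration, assuming torchvision's VGG19 layout) that walks the "features" layers and derives the paper-style names by counting convolutional layers within each pooling stack:
import torch.nn as nn
from torchvision import models
# Name each conv layer by stack and position, e.g. index 0 -> conv1_1, index 5 -> conv2_1
features = models.vgg19(pretrained=False).features  # weights aren't needed just to inspect the structure
layer_names = {}
stack, position = 1, 1
for idx, layer in enumerate(features):
    if isinstance(layer, nn.Conv2d):
        layer_names[str(idx)] = 'conv{}_{}'.format(stack, position)
        position += 1
    elif isinstance(layer, nn.MaxPool2d):
        stack += 1
        position = 1
print(layer_names)  # {'0': 'conv1_1', '2': 'conv1_2', '5': 'conv2_1', ...}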
In [1]:
%matplotlib inline
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.optim as optim
from torchvision import transforms, models
from tqdm.notebook import tqdm
In [2]:
# Get the "features" portion of VGG19 (we will not need the "classifier" portion)
vgg = models.vgg19(pretrained=True).features
# Freeze all VGG parameters since we're only optimizing the target image
for param in vgg.parameters():
param.requires_grad_(False)
In [3]:
# Move the model to GPU, if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
vgg.to(device)
Out[3]:
You can load in any images you want! Below, we've provided a helper function for loading in an image of any type and size. The load_image function also converts images to normalized Tensors.
Additionally, it will be easier to work with smaller images and to resize the content and style images so that they are the same size.
In [4]:
def load_image(img_path: str,
max_size=400,
shape=None):
''' Load in and transform an image, making sure the image
is <= 400 pixels in the x-y dims.'''
image = Image.open(img_path).convert('RGB')
# Large images will slow down processing
if max(image.size) > max_size:
size = max_size
else:
size = max(image.size)
if shape is not None:
size = shape
in_transform = transforms.Compose([
transforms.Resize(size),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406),
(0.229, 0.224, 0.225))])
# Discard the transparent alpha channel (that's the :3) and add the batch dimension
image = in_transform(image)[:3,:,:].unsqueeze(0)
return image
Next, I'm loading in images by file name and forcing the style image to be the same size as the content image.
In [5]:
# Load in content and style image
content = load_image('images/octopus.jpg').to(device)
# Resize style to match content; this makes the code easier
style = load_image('images/hockney.jpg',
shape=content.shape[-2:]).to(device)
In [6]:
# Helper function for un-normalizing an image
# and converting it from a Tensor image to a NumPy image for display
def im_convert(tensor: torch.Tensor):
""" Display a tensor as an image. """
image = tensor.to("cpu").clone().detach()
image = image.numpy().squeeze()
image = image.transpose(1,2,0)
image = image * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))
image = image.clip(0, 1)
return image
In [7]:
# Display the images
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
# Content and style images side-by-side
ax1.imshow(im_convert(content))
ax2.imshow(im_convert(style))
Out[7]:
In [8]:
# Print out VGG19 structure so you can see the names of various layers
# print(vgg)
vgg
Out[8]:
In [9]:
def get_features(image,
model,
layers=None):
""" Run an image forward through a model and get the features for
a set of layers. Default layers are for VGGNet matching Gatys et al. (2016)
"""
## TODO: Complete mapping layer names of PyTorch's VGGNet to names from the paper
## Need the layers for the content and style representations of an image
if layers is None:
layers = {'0': 'conv1_1',
'5': 'conv2_1',
'10': 'conv3_1',
'19': 'conv4_1',
'21': 'conv4_2',
'28': 'conv5_1'}
## -- do not need to change the code below this line -- ##
features = {}
x = image
# model._modules is a dictionary holding each module in the model
for name, layer in model._modules.items():
x = layer(x)
if name in layers:
features[layers[name]] = x
return features
The output of every convolutional layer is a Tensor with dimensions associated with the batch_size, a depth, d, and some height and width (h, w). The Gram matrix of a convolutional layer can be calculated as follows:
Get the batch size, depth, height, and width of the Tensor: batch_size, d, h, w = tensor.size()
Reshape that Tensor so that the spatial dimensions (h, w) are flattened
Calculate the Gram matrix by multiplying the reshaped Tensor by its transpose
Note: You can multiply two matrices using torch.mm(matrix1, matrix2).
TODO: Complete the gram_matrix function.
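In matrix form: if $F$ is the $d \times (h \cdot w)$ matrix of flattened feature maps, then the Gram matrix is
$$G = F F^{\top}, \qquad G_{ij} = \sum_{k} F_{ik} F_{jk},$$
so each entry of $G$ measures how strongly feature maps $i$ and $j$ co-activate across spatial positions, which is what makes it a useful style representation.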
In [10]:
def gram_matrix(tensor):
""" Calculate the Gram Matrix of a given tensor
Gram Matrix: https://en.wikipedia.org/wiki/Gramian_matrix
"""
## Get the batch_size, depth, height, and width of the Tensor
## Reshape it, so we're multiplying the features for each channel
## Calculate the gram matrix
_, d, h, w = tensor.size()
# Flattening the height and width
tensor = tensor.view(d, h * w)
# Calculating the gramian matrix
gram = torch.mm(tensor, tensor.t())
return gram
In [11]:
# Get content and style features only once before forming the target image
content_features = get_features(content, vgg)
style_features = get_features(style, vgg)
# Calculate the gram matrices for each layer of our style representation
style_grams = {layer: gram_matrix(style_features[layer]) for layer in style_features}
# Create a third "target" image and prep it for change
# It is a good idea to start off with the target as a copy of our *content* image
# then iteratively change its style
target = content.clone().requires_grad_(True).to(device)
Below, you are given the option to weight the style representation at each relevant layer. It's suggested that you use a range between 0 and 1 to weight these layers. By weighting earlier layers (conv1_1 and conv2_1) more, you can expect to get larger style artifacts in your resulting target image. Should you choose to weight later layers more, you'll get more emphasis on smaller features. This is because each layer is a different size and together they create a multi-scale style representation! (A hypothetical alternative weighting is sketched after the next code cell.)
Just like in the paper, we define an alpha (content_weight) and a beta (style_weight). This ratio will affect how stylized your final image is. It's recommended that you leave the content_weight = 1 and set the style_weight to achieve the ratio you want.
In [12]:
# Weights for each style layer
# Weighting earlier layers more will result in *larger* style artifacts
# Notice we are excluding `conv4_2`, our content representation
style_weights = {'conv1_1': 1.,
'conv2_1': 0.8,
'conv3_1': 0.5,
'conv4_1': 0.3,
'conv5_1': 0.1}
# You may choose to leave these as is
content_weight = 1 # alpha
style_weight = 1e6 # beta
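Just to illustrate the multi-scale idea above, here is a hypothetical alternative weighting (these particular values are only an illustration, not a recommendation) that emphasizes later layers and so favors smaller-scale texture:
# Hypothetical alternative: weight later layers more to emphasize smaller-scale style features
# (illustrative values only)
style_weights_fine = {'conv1_1': 0.1,
                      'conv2_1': 0.3,
                      'conv3_1': 0.5,
                      'conv4_1': 0.8,
                      'conv5_1': 1.}
You could swap this dictionary in for style_weights and re-run the loop below to compare the results.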
You'll decide on a number of steps for which to update your image; this is similar to the training loop you've seen before, only we are changing our target image and nothing else about VGG19 or any other image. Therefore, the number of steps is really up to you to set! I recommend using at least 2000 steps for good results, but you may want to start out with fewer steps if you are just testing out different weight values or experimenting with different images.
Inside the iteration loop, you'll calculate the content and style losses and update your target image accordingly.
The content loss will be the mean squared difference between the target and content features at layer conv4_2. This can be calculated as follows:
content_loss = torch.mean((target_features['conv4_2'] - content_features['conv4_2'])**2)
The style loss is calculated in a similar way, only you have to iterate through a number of layers, specified by name in our dictionary style_weights.
You'll calculate the Gram matrix for the target image, target_gram, and for the style image, style_gram, at each of these layers and compare those Gram matrices, calculating the layer_style_loss. Later, you'll see that this value is normalized by the size of the layer.
Finally, you'll create the total loss by adding up the style and content losses and weighting them with your specified alpha and beta; the resulting equation is written out below.
Intermittently, we'll print out this loss; don't be alarmed if the loss is very large. It takes some time for an image's style to change and you should focus on the appearance of your target image rather than any loss value. Still, you should see that this loss decreases over some number of iterations.
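Putting those pieces together, the quantity the loop below minimizes is, roughly in the paper's notation,
$$L_{\text{total}} = \alpha \, L_{\text{content}} + \beta \, L_{\text{style}},$$
where $\alpha$ is content_weight, $\beta$ is style_weight, $L_{\text{content}}$ is the mean squared difference at conv4_2, and $L_{\text{style}}$ is the weighted, size-normalized sum of squared Gram matrix differences over the style layers.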
In [13]:
# For displaying the target image, intermittently
show_every = 400
# Iteration hyperparameters
optimizer = optim.Adam(params=[target],
lr=0.005)
# Decide how many iterations to update your image (at least 2000 is recommended for good results)
steps = 400
for ii in tqdm(range(1, steps + 1)):
## TODO: get the features from your target image
## Then calculate the content loss
target_features = get_features(image=target,
model=vgg)
content_loss = torch.mean((target_features['conv4_2'] - content_features['conv4_2']) ** 2)
# The style loss
# Initialize the style loss to 0
style_loss = 0
# Iterate through each style layer and add to the style loss
for layer in style_weights:
# Get the "target" style representation for the layer
target_feature = target_features[layer]
_, d, h, w = target_feature.shape
## TODO: Calculate the target gram matrix
target_gram = gram_matrix(target_feature)
## TODO: Get the "style" style representation
style_gram = style_grams[layer]
## TODO: Calculate the style loss for one layer, weighted appropriately
layer_style_loss = style_weights[layer] * torch.mean((target_gram - style_gram) ** 2)
# Add to the style loss
style_loss += layer_style_loss / (d * h * w)
## TODO: Calculate the *total* loss
total_loss = content_weight * content_loss + style_weight * style_loss
## -- do not need to change code, below -- ##
# Update your target image
optimizer.zero_grad()
total_loss.backward()
optimizer.step()
# Display intermediate images and print the loss
if ii % show_every == 0:
print('Total loss: ', total_loss.item())
plt.imshow(im_convert(target))
plt.show()
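As a side note on the optimizer choice: the Gatys et al. paper updates the image with L-BFGS rather than Adam. A minimal sketch of that variant, reusing the features, Gram matrices, and weights defined above (the step count here is just a placeholder to experiment with), might look like this:
# Sketch only: L-BFGS requires a closure that recomputes the loss on every call
lbfgs = optim.LBFGS([target])

def closure():
    lbfgs.zero_grad()
    feats = get_features(target, vgg)
    c_loss = torch.mean((feats['conv4_2'] - content_features['conv4_2']) ** 2)
    s_loss = 0
    for layer in style_weights:
        _, d, h, w = feats[layer].shape
        layer_loss = style_weights[layer] * torch.mean((gram_matrix(feats[layer]) - style_grams[layer]) ** 2)
        s_loss += layer_loss / (d * h * w)
    loss = content_weight * c_loss + style_weight * s_loss
    loss.backward()
    return loss

for _ in range(50):  # each L-BFGS step evaluates the closure several times internally
    lbfgs.step(closure)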
In [14]:
# Display content and final, target image
fig, (ax1, ax2) = plt.subplots(1, 2,
figsize=(20, 10))
ax1.imshow(im_convert(content))
ax2.imshow(im_convert(target))
Out[14]: